Co-authored-by: waldekmastykarz <11164679+waldekmastykarz@users.noreply.github.com>
Copilot (AI) changed the title from "[WIP] Add support for OpenAI Responses API" to "Add support for OpenAI Responses API" on Jan 13, 2026.
Contributor
Pull request overview
This pull request adds support for the OpenAI Responses API (/v1/responses) to complement the existing Chat Completions API support. The Responses API is OpenAI's recommended approach for new projects. The implementation extends all relevant OpenAI-related plugins to handle both API formats while maintaining backward compatibility.
Changes:
- Extended OpenAI model classes to support Responses API request/response structures with property aliases for token usage fields
- Added the `TryGetCompletionLikeRequest` helper method for plugins that only handle text generation APIs
- Updated four plugins (`LanguageModelFailurePlugin`, `LanguageModelRateLimitingPlugin`, `OpenAITelemetryPlugin`, `OpenAIMockResponsePlugin`) to recognize and process Responses API requests
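The point of the shared helper is that text-generation plugins get one code path for both payload shapes. A minimal consumer sketch follows; `TryGetCompletionLikeRequest` is named in this PR, but its signature, the containing type, and the surrounding calls here are assumptions, not the PR's actual code:

```csharp
// Hypothetical consumer sketch -- actual Dev Proxy plugin APIs may differ.
public void OnLanguageModelRequest(string requestBody)
{
    // One branch handles both Chat Completions and Responses API requests
    // once they are normalized to a common "completion-like" shape.
    if (OpenAIRequest.TryGetCompletionLikeRequest(requestBody, out var completionRequest))
    {
        // completionRequest exposes the prompt regardless of whether it
        // arrived as a "messages" array or an "input" array.
        CountPromptTokens(completionRequest);
    }
}
```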
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| `DevProxy.Abstractions/LanguageModel/OpenAIModels.cs` | Adds Responses API model classes, detection logic in `TryGetOpenAIRequest`, the new `TryGetCompletionLikeRequest` helper, and property aliases in `OpenAIResponseUsage` for token field mapping |
| `DevProxy.Plugins/Behavior/LanguageModelFailurePlugin.cs` | Adds fault injection support for the Responses API by appending fault prompts to the `input` array |
| `DevProxy.Plugins/Behavior/LanguageModelRateLimitingPlugin.cs` | Switches to `TryGetCompletionLikeRequest` to include the Responses API in rate limit tracking |
| `DevProxy.Plugins/Inspection/OpenAITelemetryPlugin.cs` | Adds telemetry tag/metric collection for Responses API operations, including request parameters and response status |
| `DevProxy.Plugins/Mocking/OpenAIMockResponsePlugin.cs` | Implements Responses API mocking by converting between the Responses format and the Chat Completions format for the local LM |
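The "property aliases" in the first table row could be realized by mapping both JSON spellings onto the same backing values. The following is a speculative `System.Text.Json` sketch based on the PR text, not the PR's actual code; the property names are assumptions:

```csharp
using System.Text.Json.Serialization;

// Speculative sketch: one usage type that deserializes both token
// naming conventions (Responses API and Chat Completions).
public class OpenAIResponseUsage
{
    // Responses API names
    [JsonPropertyName("input_tokens")]
    public long InputTokens { get; set; }

    [JsonPropertyName("output_tokens")]
    public long OutputTokens { get; set; }

    // Chat Completions names, aliased onto the same backing values
    [JsonPropertyName("prompt_tokens")]
    public long PromptTokens { get => InputTokens; set => InputTokens = value; }

    [JsonPropertyName("completion_tokens")]
    public long CompletionTokens { get => OutputTokens; set => OutputTokens = value; }
}
```

One caveat of this alias style: serializing such a type emits both spellings, so if round-tripping matters a custom `JsonConverter` may be the cleaner choice.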
Review follow-ups:
- Extract a shared `IsResponsesApiRequest` helper to deduplicate detection logic
- Remove the unused `OpenAIResponsesUsage` class
- Add missing base properties in the `LanguageModelFailurePlugin` Responses API branch
- Fix complex content formatting in `OpenAITelemetryPlugin`
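The suggested shared `IsResponsesApiRequest` helper might look roughly like this. The detection rule (an `input` array whose object elements carry `role` or `type`) comes from the PR description; the class name, parameter type, and placement are hypothetical:

```csharp
using System.Text.Json;

internal static class OpenAIRequestDetection
{
    // Hypothetical sketch of the deduplicated detection helper:
    // a Responses API request has an "input" array whose object
    // elements contain a "role" or "type" property.
    public static bool IsResponsesApiRequest(JsonElement body)
    {
        if (!body.TryGetProperty("input", out var input) ||
            input.ValueKind != JsonValueKind.Array)
        {
            return false;
        }

        foreach (var item in input.EnumerateArray())
        {
            if (item.ValueKind == JsonValueKind.Object &&
                (item.TryGetProperty("role", out _) ||
                 item.TryGetProperty("type", out _)))
            {
                return true;
            }
        }

        return false;
    }
}
```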
OpenAI recommends the Responses API (`/v1/responses`) over Chat Completions for new projects. This adds support for it across all OpenAI-related plugins while preserving existing Chat Completions behavior.

The Responses API uses `input` (an array of role/content objects) instead of `messages`, and returns an `output` array with structured content. Token usage fields also differ: `input_tokens`/`output_tokens` vs `prompt_tokens`/`completion_tokens`.

Changes
OpenAIModels.cs
- Added `OpenAIResponsesRequest`, `OpenAIResponsesResponse`, and supporting types
- Extended `TryGetOpenAIRequest` to detect the Responses API by checking for an `input` array with objects containing `role`/`type`
- Added the `TryGetCompletionLikeRequest` shared method for plugins that only handle text generation
- Updated `OpenAIResponseUsage` to deserialize both token naming conventions via property aliases

Plugin Updates
- `LanguageModelFailurePlugin`: injects fault prompts into the Responses API `input` array
- `LanguageModelRateLimitingPlugin`: recognizes Responses API requests for rate limit tracking
- `OpenAITelemetryPlugin`: adds telemetry tags/metrics for Responses API operations
- `OpenAIMockResponsePlugin`: mocks the Responses API by converting to/from the Chat Completions format

Example Request Detection
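The original example under this heading was not captured; the following minimal payloads illustrate the two request shapes the detection logic distinguishes (fields trimmed to the ones that matter; the model name is a placeholder):

```jsonc
// Chat Completions: a "messages" array
{ "model": "gpt-4o", "messages": [ { "role": "user", "content": "Hi" } ] }

// Responses API: an "input" array of role/content objects
{ "model": "gpt-4o", "input": [ { "role": "user", "content": "Hi" } ] }
```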